New evidence for prelexical phonological processing in word recognition

Authors

  • Emmanuel Dupoux
  • Christophe Pallier
  • Kazuhiko Kakehi
  • Jacques Mehler
  • Takao Fushimi
Abstract

When presented with stimuli that contain illegal consonant clusters, Japanese listeners tend to hear an illusory vowel that makes their perception conform to the phonotactics of their language. In a previous paper, we suggested that this effect arises from language-specific prelexical processes (Dupoux, Kakehi, Hirose, Pallier & Mehler, 1999). The present paper assesses the alternative hypothesis that this illusion is due to a "top-down" lexical effect. We manipulate the lexical neighborhood of non-words that contain illegal consonant clusters and show that perception of the illusory vowel is not due to lexical influences. This demonstrates that phonotactic knowledge influences speech processing at an early stage.

*. Laboratoire de Sciences Cognitives et Psycholinguistique, CNRS-EHESS. **. Graduate School of Human Informatics, Nagoya University, Japan. +. International School for Advanced Studies, Trieste, Italy.

We thank Nicolas Bernard and Takao Fushimi for their help in preparing the stimuli and running the experiment. We also thank James McQueen, Sharon Peperkamp and two anonymous reviewers for very useful comments on a previous version of this manuscript. Part of this work was presented at Eurospeech'99 (Dupoux, Fushimi, Kakehi & Mehler, 1999). Correspondence concerning this article should be addressed to Emmanuel Dupoux, LSCP, EHESS-CNRS, 54 bd Raspail, Paris, 75006. E-mail: [email protected].

Most models of spoken word recognition postulate that the acoustic signal is transformed into a prelexical representation, typically a string of phonemes, and that this representation is then used to access the lexicon. Such models have to spell out how the acoustic signal is transformed into a prelexical representation and whether this representation is enhanced by lexical knowledge. Many studies have established that the mapping between the signal and the prelexical representation is not simple. In their famous "Perception of the Speech Code" paper, Liberman, Cooper, Shankweiler and Studdert-Kennedy (1967) stressed the complexity of the relationship between the acoustic signal and the phonetic message: neighboring phones interact so much that a single acoustic stretch is often ambiguous and requires a larger context to be interpretable (see, for example, Miller & Liberman, 1979; Mann & Repp, 1981; Whalen, 1989). Their proposed solution was that listeners use their knowledge of how speech sounds are produced to decode the speech signal (for example, to compensate for coarticulation). A second source of information is lexical knowledge. Indeed, numerous studies have demonstrated lexical influences on phoneme identification (e.g. Ganong, 1980; Samuel, 1981a, 1987; Frauenfelder, Segui & Dijkstra, 1990). The phenomenon of phonemic restoration attests that lexical knowledge can yield the perception of a phoneme that is not present in the signal (even if acoustics play an important role in the phenomenon; cf. Samuel, 1981b). A third source of information that the speech perception apparatus can use is phonotactic knowledge. There is some empirical evidence that listeners tend to assimilate illegal sequences of phonemes to legal ones (Massaro & Cohen, 1983; Hallé, Segui, Frauenfelder & Meunier, 1998). Thus, French listeners tend to hear the sequence /dl/, which is illegal in French, as /gl/, which is legal (Hallé et al., 1998). Of these three sources of information, the influence of phonotactics is the least well established.
Both the Massaro and Cohen and the Hallé et al. studies used stimuli in only one language. Therefore, it cannot be excluded that the observed effects were due to universal effects of compensation for coarticulation: it could be that /dl/ is universally harder to perceive than /gl/. A more convincing demonstration of phonotactic effects must involve a cross-linguistic manipulation. A second difficulty is the potential confound between phonotactic and lexical information. It can be argued that nonwords containing illegal sequences of phonemes typically have fewer lexical neighbors than legal nonwords. As a matter of fact, McClelland and Elman (1986) interpreted the phonotactic effects of Massaro and Cohen as the result of top-down influences from the lexicon (a "lexical conspiracy" effect). They reported unpublished data in which an apparent phonotactic effect (the preference for /dw/ over /bw/ in nonwords with an ambiguous initial phoneme) was reversed in the presence of a strong lexical candidate: /?wacelet/ yielded the perception of /bwacelet/ (from bracelet) instead of /dwacelet/, despite the illegality of the /bw/ cluster. The authors argued that the typical preference for /dw/ over /bw/ is due to the predominance of /dw/ words in the lexicon. They simulated these data, as well as Massaro and Cohen's (1983) "phonotactic" result, with a "lexical conspiracy" effect in the TRACE model (but see Pitt & McQueen, 1998, for arguments against this interpretation).

In this paper, we wish to revisit the relative roles of phonotactic and lexical knowledge by building on an effect that has been well documented cross-linguistically: the perception of illusory vowels in Japanese. Dupoux, Kakehi, Hirose, Pallier & Mehler (1999) demonstrated that Japanese listeners, but not French listeners, perceive an /u/ vowel between consonants forming clusters that are illegal in Japanese (e.g. between /b/ and /z/). These data show that the perceptual system of Japanese listeners inserts an illusory vowel between adjacent consonants in order to conform to the expected pattern of their language. We called this phenomenon "vowel epenthesis". It suggests that the role of phonotactics is important enough to produce the illusion of a segment that is not actually present in the signal.

Though Dupoux et al. (1999) attributed this effect to phonotactic knowledge, it is not a priori excluded that the illusion results from top-down lexical influences. One may imagine that many Japanese words contain sequences /C1uC2/ in which C1C2 corresponds to one of the consonant clusters present in the nonword stimuli used in the experiment. It could then be argued that the activations of such lexical items conspire to produce the perception of /u/. Thus, the potential existence of real Japanese words that are phonetic neighbors of the non-word stimuli may have induced participants to report a vowel that is not present in the signal. Some may find it excessive to propose that lexical effects can be strong enough to blindly insert a vowel that is not present in the signal. However, as we noted above, there are well documented demonstrations that the lexicon can fill in missing speech sounds, at least when the underlying signal is degraded or ambiguous (Warren, 1984; Samuel, 1981a). In the present case, the signal is clear, but it contains sequences of phonemes that are illegal for Japanese speakers. The influence of lexical knowledge in such a situation is an open question.
This paper aims to determine the source of the vowel epenthesis effect in Japanese: whether the illusory vowel is inserted during the first stage of speech processing, under the influence of phonotactic constraints, or whether it arises from the participants' lexical knowledge. To this end, we created non-words containing consonant clusters that are illegal in Japanese. Each item yields exactly one lexical neighbor when a vowel is inserted between the consonants. Specifically, for some items the lexicon favors the insertion of the vowel /u/ (as in sokdo -> sokudo, 'speed'); for other items the lexicon calls for the insertion of a vowel other than /u/ to yield a word (as in mikdo -> mikado, 'emperor'). How do Japanese listeners perceive these illegal nonwords? If perceptual processes insert the vowel /u/ within the /kd/ cluster irrespective of the lexical status of the outcome, then listeners should report hearing an /u/ inside both /sokdo/ and /mikdo/. If, in contrast, their perception is influenced by the nearest real Japanese word, we expect them to report an /u/ and an /a/, respectively.




Publication date: 2001